Results 1 - 8 of 8
1.
2023 Annual Reliability and Maintainability Symposium, RAMS 2023 ; 2023-January, 2023.
Article in English | Scopus | ID: covidwho-2295160

ABSTRACT

Risk assessment, particularly when using simulations, requires that the analyst develop estimates of expected, low, and high values for inputs. Mean and standard deviation are often used to assess the variability of metrics, assuming that the underlying distribution is normal. However, it is increasingly recognized that non-normal distributions are common and important. If data are available, it is simple and straightforward to check this assumption by computing higher order moments.

Claude Shannon [1], [2] proposed that the information entropy for a set of N discrete events can be measured by (Formula Presented). E. T. Jaynes [3] proposed that, if data are available, information entropy can be maximized using Lagrangian multipliers and that the resulting probability distribution maximizes the uncertainty of that distribution given the data.

In order to use entropy maximization, it is necessary to define constraints such that Σpi = 1, plus constraints on the mean, variance, skewness, kurtosis, and other moments. This problem does not have a closed-form solution but can be solved iteratively in a spreadsheet. The problem can be set up as follows for mean x̄ and variance s²: (Formula Presented). This basic formulation models the normal distribution. The importance of non-normality can be estimated by adding higher order moments as desired. For n ≥ 3, constraints can be added using: (Formula Presented), where Mn is the computed nth moment of the data set.

Setting ∂H/∂pi = 0 maximizes information entropy, and the resulting probability distribution has the most uncertainty given the observed data. This suggests that it is possible to develop an estimate of the distribution even where some values are underrepresented in the sample. It further suggests that unusual or atypical results can be better estimated.

This paper uses the method of maximizing entropy to model observed data and studies two time-series applications. One problem of interest is sequential acquisition of data. For example, time to failure for a device may be a metric of concern. A maximum entropy model provides an empirical estimate of the distribution of this metric. A second problem of interest is forecasting the distribution of a metric at some point in the future. This applies to supply chain management. Project sponsors prepare cost and schedule estimates well in advance of placing the orders for the materials used in those projects. Management reserves for cost and schedule are typically set by subject matter experts, and recent experience (e.g., supply chain disruptions due to the COVID-19 pandemic) may overemphasize current data when developing risk assessments. This approach offers a data-driven way to empirically develop risk assessments. © 2023 IEEE.
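
Editor's sketch: the constrained maximum entropy problem described in this abstract can be illustrated with a small numerical optimization. The support grid, the SLSQP solver, and all function names below are illustrative assumptions, not the paper's formulation (the paper solves the problem iteratively in a spreadsheet).

# Maximize Shannon entropy over a discrete support subject to normalization,
# mean, and variance constraints (optionally a third central moment).
import numpy as np
from scipy.optimize import minimize

def max_entropy_pmf(x, mean, var, third_moment=None):
    """Probabilities p_i on support x maximizing -sum(p*log(p)) under
    sum(p)=1, sum(p*x)=mean, sum(p*(x-mean)**2)=var."""
    n = len(x)

    def neg_entropy(p):
        p = np.clip(p, 1e-12, None)        # avoid log(0)
        return np.sum(p * np.log(p))       # minimizing this maximizes entropy

    cons = [
        {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
        {"type": "eq", "fun": lambda p: np.sum(p * x) - mean},
        {"type": "eq", "fun": lambda p: np.sum(p * (x - mean) ** 2) - var},
    ]
    if third_moment is not None:           # optional higher-order constraint
        cons.append({"type": "eq",
                     "fun": lambda p: np.sum(p * (x - mean) ** 3) - third_moment})

    p0 = np.full(n, 1.0 / n)               # start from the uniform pmf
    res = minimize(neg_entropy, p0, bounds=[(0, 1)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

# Matching only mean and variance should recover an approximately normal shape.
support = np.linspace(-4, 4, 81)
p = max_entropy_pmf(support, mean=0.0, var=1.0)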

2.
Energies ; 16(1):277, 2023.
Article in English | ProQuest Central | ID: covidwho-2199927

ABSTRACT

The connection between Earth's global temperature and carbon dioxide (CO2) emissions is one of the greatest challenges in climate change science, since there is some controversy about the real impact of CO2 emissions on the increase in global temperature. This work contributes to the existing literature by analyzing the relationship between CO2 emissions and the Earth's global temperature over 61 years, and provides a recent review of the emerging literature as well. Through a statistical approach based on maximum entropy, this study supports the results of other techniques that identify a positive impact of CO2 on the increase of the Earth's global temperature. Given the well-known difficulties in measuring global temperature and CO2 emissions with high precision, this statistical approach is particularly appealing in climate change science, as it allows the replication of the original time series and the subsequent construction of confidence intervals for the model parameters. To prevent future risks, besides the urgently needed reduction of greenhouse gas emissions, it is necessary to stop using the planet and nature as if resources were infinite.
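
Editor's sketch: maximum entropy approaches to regression problems of this kind are commonly formulated as generalized maximum entropy (GME) estimation. The sketch below fits a GME linear model on simulated CO2/temperature series; the support points, toy data, and solver are assumptions for illustration, not the authors' specification.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 60
co2 = np.linspace(0.0, 1.0, n)                     # toy, rescaled emissions series
temp = 0.8 * co2 + 0.05 * rng.standard_normal(n)   # toy temperature anomaly

X = np.column_stack([np.ones(n), co2])             # intercept + slope
k = X.shape[1]
z = np.array([-2.0, 0.0, 2.0])                     # support points for each coefficient
v = np.array([-0.5, 0.0, 0.5])                     # support points for each error term
m = len(z)

def unpack(theta):
    p = theta[: k * m].reshape(k, m)               # coefficient probabilities
    w = theta[k * m:].reshape(n, m)                # error probabilities
    return p, w

def neg_entropy(theta):
    t = np.clip(theta, 1e-12, None)
    return np.sum(t * np.log(t))

def gme_constraints(theta):
    p, w = unpack(theta)
    beta = p @ z                                   # beta_j = sum_i z_i p_ji
    eps = w @ v
    data = temp - X @ beta - eps                   # n data-consistency equations
    norm_p = p.sum(axis=1) - 1.0                   # adding-up constraints
    norm_w = w.sum(axis=1) - 1.0
    return np.concatenate([data, norm_p, norm_w])

theta0 = np.full((k + n) * m, 1.0 / m)
res = minimize(neg_entropy, theta0, method="SLSQP",
               bounds=[(0.0, 1.0)] * theta0.size,
               constraints=[{"type": "eq", "fun": gme_constraints}])
p_hat, _ = unpack(res.x)
print("GME estimates (intercept, slope):", p_hat @ z)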

3.
2021 International Conference on Artificial Intelligence and Big Data Analytics, ICAIBDA 2021 ; : 22-27, 2021.
Article in English | Scopus | ID: covidwho-1774635

ABSTRACT

In recent years, companies have widely used sentiment analysis with machine learning classification algorithms to help business decision-making. Sentiment analysis helps evaluate customer opinions on a product, whether goods or services. Companies need this feedback to improve the performance and quality of their products and customer satisfaction. Machine learning algorithms widely used for sentiment analysis include the Naive Bayes Classifier, Maximum Entropy, Decision Tree, and Support Vector Machine. In this study, we propose a sentiment analysis approach using a very popular method, Extreme Gradient Boosting (XGBoost). XGBoost combines weak learners into an ensemble classifier to build a strong learner. This study focuses on review data from the most popular telemedicine application in Indonesia, Halodoc. It aims to examine people's sentiment towards telemedicine applications in Indonesia, especially during the COVID-19 pandemic. We also present a fishbone diagram to analyze the factors users complained about most. The data we have are imbalanced; however, XGBoost performs well, achieving 96.24% accuracy without applying techniques for imbalanced data. © 2021 IEEE.
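
Editor's sketch: a minimal pipeline in the spirit of the approach described above, combining TF-IDF text features with an XGBoost classifier. The file name, column names, and hyperparameters are placeholders, not the authors' actual setup.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("halodoc_reviews.csv")          # hypothetical file with text/label columns
X_text, y = df["text"], df["label"]              # label: 0 = negative, 1 = positive

X_train, X_test, y_train, y_test = train_test_split(
    X_text, y, test_size=0.2, stratify=y, random_state=42)

vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_train_vec = vec.fit_transform(X_train)
X_test_vec = vec.transform(X_test)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_train_vec, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_vec)))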

4.
10th International Conference on Complex Networks and Their Applications, COMPLEX NETWORKS 2021 ; 1016:132-143, 2022.
Article in English | Scopus | ID: covidwho-1627059

ABSTRACT

Identifying and detecting disinformation is a major challenge. Twitter provides datasets of disinformation campaigns through its information operations reports. We compare the results of community detection using a classical network representation with a maximum entropy network model. We conclude that the latter method is useful for identifying the most significant interactions in the disinformation network over multiple datasets. We also apply the method to a disinformation dataset related to COVID-19, which allows us to assess the repeatability of studies on disinformation datasets. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
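
Editor's sketch: one way to illustrate the comparison described above is to detect communities on the raw interaction graph and again on a backbone of edges whose weights exceed the expectation of a degree-constrained maximum entropy null (the configuration model, expected weight roughly k_i*k_j/2m). The toy graph, the simple threshold, and the greedy modularity algorithm below are illustrative stand-ins, not the paper's pipeline.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# toy weighted interaction graph; in practice, edges = (account_i, account_j, n_interactions)
G = nx.les_miserables_graph()

# 1) classical representation: detect communities on the raw weighted graph
raw_comms = greedy_modularity_communities(G, weight="weight")

# 2) maximum entropy filter: keep only edges whose observed weight exceeds the
#    configuration-model expectation, then re-run detection on the backbone
two_m = sum(d for _, d in G.degree(weight="weight"))
backbone = nx.Graph()
backbone.add_nodes_from(G.nodes())
for u, v, data in G.edges(data=True):
    expected = G.degree(u, weight="weight") * G.degree(v, weight="weight") / two_m
    if data.get("weight", 1) > expected:
        backbone.add_edge(u, v, weight=data.get("weight", 1))

filtered_comms = greedy_modularity_communities(backbone, weight="weight")
print(len(raw_comms), "communities on raw graph;",
      len(filtered_comms), "on max-entropy backbone")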

5.
Entropy (Basel) ; 23(10)2021 Oct 05.
Article in English | MEDLINE | ID: covidwho-1463583

ABSTRACT

The financial market is a complex system in which assets influence each other, causing, among other factors, price interactions and co-movement of returns. Using the Maximum Entropy Principle approach, we analyze the interactions between a selected set of stock assets and equity indices under different high and low return volatility episodes during the 2008 Subprime Crisis and the 2020 COVID-19 outbreak. We carry out an inference process to identify the interactions, in which we implement a pairwise Ising distribution model describing the first and second moments of the distribution of the discretized returns of each asset. Our results indicate that second-order interactions explain more than 80% of the entropy in the system during the Subprime Crisis and slightly more than 50% during the COVID-19 outbreak, independently of whether a high or low volatility period is analyzed. The evidence shows that during these periods, slight changes in the second-order interactions are enough to induce large changes in asset correlations, but the proportion of positive and negative interactions remains virtually unchanged. Although some interactions change signs, the proportion of these changes is the same from period to period, which keeps the system in a ferromagnetic state. These results are similar even when analyzing triadic structures in the signed network of couplings.
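
Editor's sketch: a minimal pairwise maximum entropy (Ising) fit on discretized returns, using the naive mean-field inversion J = -(C^-1) off the diagonal. The simulated returns and the mean-field approximation are assumptions for illustration; the paper's actual inference procedure may differ.

import numpy as np

rng = np.random.default_rng(1)
T, N = 1000, 8                                   # trading days, assets
returns = rng.multivariate_normal(np.zeros(N),
                                  0.5 * np.eye(N) + 0.5, size=T)

s = np.where(returns > np.median(returns, axis=0), 1, -1)   # spins: up/down moves

m = s.mean(axis=0)                               # first moments <s_i>
C = np.cov(s, rowvar=False)                      # covariances of the spins

J = -np.linalg.inv(C)                            # naive mean-field couplings
np.fill_diagonal(J, 0.0)
h = np.arctanh(m) - J @ m                        # mean-field external fields

pos = (J[np.triu_indices(N, k=1)] > 0).mean()
print(f"fraction of positive pairwise couplings: {pos:.2f}")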

6.
Entropy (Basel) ; 23(8)2021 Aug 13.
Article in English | MEDLINE | ID: covidwho-1376756

ABSTRACT

Sea level rise and high-impact coastal hazards due to on-going and projected climate change dramatically affect many coastal urban areas worldwide, including those with the highest urbanization growth rates. To develop tailored coastal climate services that can inform decision makers on climate adaptation in coastal cities, a better understanding and modeling of multifaceted urban dynamics is important. We develop a coastal urban model family, where the population growth and urbanization rates are modeled in the framework of diffusion over the half-bounded and bounded domains, and apply the maximum entropy principle to the latter case. Population density distributions are derived analytically whenever possible. Steady-state wave solutions balancing the width of inhabited coastal zones, with the skewed distributions maximizing population entropy, might be responsible for the coastward migrations outstripping the demographic development of the hinterland. With appropriate modifications of boundary conditions, the developed family of diffusion models can describe coastal urban dynamics affected by climate change.
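
Editor's sketch (not the paper's derivation): a compact way to see why the maximum entropy principle on a bounded domain yields coast-skewed profiles. With x the distance from the shoreline on [0, L] and only the mean distance constrained, the entropy-maximizing density is a truncated exponential, consistent with the skewed distributions mentioned above.

\begin{align*}
  \max_{\rho}\; & -\int_0^L \rho(x)\ln\rho(x)\,dx
  \quad\text{s.t.}\quad \int_0^L \rho(x)\,dx = 1, \quad \int_0^L x\,\rho(x)\,dx = \bar{x} \\
  \Rightarrow\; & \rho(x) = \frac{\lambda\, e^{-\lambda x}}{1 - e^{-\lambda L}},
  \qquad \text{with } \lambda \text{ fixed by }
  \frac{1}{\lambda} - \frac{L\, e^{-\lambda L}}{1 - e^{-\lambda L}} = \bar{x}.
\end{align*}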

7.
PeerJ ; 9: e10819, 2021.
Article in English | MEDLINE | ID: covidwho-1119625

ABSTRACT

To date, official data on the number of people infected with SARS-CoV-2, the virus responsible for COVID-19, have been released by the Italian Government only on the basis of a non-representative sample of the population that tested positive on swab tests. However, a reliable estimate of the number of infected people, including asymptomatic ones, is crucial for preparing operational schemes and for estimating the future number of people who will require, to different extents, medical attention. In order to overcome the current data shortcomings, this article proposes a bootstrap-driven estimation procedure for the number of people infected with SARS-CoV-2. This method is designed to be robust, automatic, and suitable for generating estimates at the regional level. The results show that, while official data as of March 12th report 12,839 cases in Italy, the number of people infected with SARS-CoV-2 could be as high as 105,789.
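
Editor's sketch: the generic machinery behind a bootstrap-driven estimate with a percentile confidence interval. The resampled quantity, the placeholder positivity data, and the population scaling are illustrative assumptions, not the paper's regional estimator.

import numpy as np

rng = np.random.default_rng(42)
positivity = rng.beta(2, 18, size=500)        # placeholder: regional swab positivity rates

def estimator(sample):
    return sample.mean() * 60_000_000         # scale by an approximate population size

boot = np.array([estimator(rng.choice(positivity, size=positivity.size, replace=True))
                 for _ in range(5000)])       # bootstrap replications

point = estimator(positivity)
low, high = np.percentile(boot, [2.5, 97.5])  # percentile confidence interval
print(f"point estimate: {point:,.0f}  95% CI: [{low:,.0f}, {high:,.0f}]")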

8.
Ecol Modell ; 431: 109187, 2020 Sep 01.
Article in English | MEDLINE | ID: covidwho-610019

ABSTRACT

The COVID-19 pandemic is a global threat to human health and the economy that requires urgent prevention and monitoring strategies. Several models are under study to control the disease spread and infection rate and to detect possible factors that might favour them, with a focus on understanding the correlation between the disease and specific geophysical parameters. However, the pandemic does not present evident environmental hindrances in the infected countries. Nevertheless, a lower rate of infections has been observed in some countries, which might be related to particular population and climatic conditions. In this paper, the infection rate of COVID-19 is modelled globally at a 0.5° resolution, using a Maximum Entropy-based Ecological Niche Model that identifies geographical areas potentially subject to a high infection rate. The model identifies locations that could favour infection rate due to their particular geophysical (surface air temperature, precipitation, and elevation) and human-related characteristics (CO2 and population density). It was trained using data from Italian provinces that have reported a high infection rate and subsequently tested using datasets from world countries' reports. Based on this model, a risk index was calculated to identify world countries and regions that potentially have a high risk of disease increment. The distribution outputs foresee a high infection rate in many locations where real-world disease outbreaks have occurred, e.g. the Hubei province in China, and report a high risk of disease increment in most world countries that have reported significant outbreaks (e.g. the Western U.S.A.). Overall, the results suggest that a complex combination of the selected parameters might be of integral importance to understanding the propagation of COVID-19 among human populations, particularly in Europe. The model and the data were distributed through Open-science Web services to maximise opportunities for re-use with new data and new diseases, and also to enhance the transparency of the approach and results.
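
Editor's sketch: a crude presence-background classifier over gridded covariates (temperature, precipitation, elevation, CO2, population density), used here as a stand-in for the MaxEnt ecological niche model named in the abstract. The synthetic grid, the covariates, and the scikit-learn logistic regression are assumptions for illustration only, not the paper's model or data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_cells = 2000                                             # toy 0.5-degree grid cells
X = rng.standard_normal((n_cells, 5))                      # temp, precip, elev, CO2, pop density
presence = (X[:, 0] + 0.5 * X[:, 4]
            + rng.standard_normal(n_cells) > 1.2).astype(int)   # "high infection rate" cells

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, presence)                                     # train on presence/background labels

risk = model.predict_proba(X)[:, 1]                        # per-cell risk index in [0, 1]
print("top-risk cells:", np.argsort(risk)[-5:])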
